Outcomes of surgery for patients with Behcet’s disease causing aortic pseudoaneurysm: a shift from open surgery to endovascular repair
OBJECTIVES: Behcet’s disease is a form of systemic vasculitis that affects vessels of various sizes. Aortic pseudoaneurysm is one of the most important causes of death among patients with Behcet’s disease because of its high risk of rupture and associated mortality. Our study aimed to investigate the outcomes of patients with Behcet’s disease and aortic pseudoaneurysms undergoing open surgery or endovascular aortic repair. METHODS: From January 2003 to September 2014, ten consecutive patients undergoing surgery for aortic pseudoaneurysm met the diagnostic criteria for Behcet’s disease. Endovascular repair was the preferred modality, and open surgery was performed as an alternative. Systemic immunosuppressive medication was administered once Behcet’s disease was definitively diagnosed. RESULTS: Eight patients initially underwent endovascular repair and two initially underwent open surgery. The overall success rate was 90%; the only failed case involved the use of the chimney technique to reach a suprarenal location. The median follow-up duration was 23 months. There were 7 recurrences in 5 patients, with a median interval between operation and recurrence of 13 months. No significant risk factors for recurrence were identified, but a notable difference in recurrence was observed between patients who did and did not receive preoperative immunosuppressive medication. Four aneurysm-related deaths occurred during the follow-up period. The overall 1-year, 3-year and 5-year survival rates were 80%, 64% and 48%, respectively. CONCLUSIONS: Both open surgery and endovascular repair are safe and effective for treating aortic pseudoaneurysm in patients with Behcet’s disease. The results from our retrospective study indicated that immunosuppressive medication is essential to defer the occurrence and development of recurrent aneurysms.
ConPET: Continual Parameter-Efficient Tuning for Large Language Models
Continual learning necessitates the continual adaptation of models to newly
emerging tasks while minimizing the catastrophic forgetting of old ones. This
is extremely challenging for large language models (LLMs) with vanilla
full-parameter tuning due to high computation costs, memory consumption, and
the forgetting issue. Inspired by the success of parameter-efficient tuning (PET),
we propose Continual Parameter-Efficient Tuning (ConPET), a generalizable
paradigm for continual task adaptation of LLMs with task-number-independent
training complexity. ConPET includes two versions with different application
scenarios. First, Static ConPET can adapt former continual learning methods
originally designed for relatively smaller models to LLMs through PET and a
dynamic replay strategy, which largely reduces tuning costs and alleviates
the over-fitting and forgetting issues. Furthermore, to maintain scalability,
Dynamic ConPET adopts separate PET modules for different tasks and a PET module
selector for dynamic optimal selection. In our extensive experiments, the
adaptation of Static ConPET helps multiple former methods reduce the scale of
tunable parameters by over 3,000 times and surpass the PET-only baseline by at
least 5 points on five smaller benchmarks, while Dynamic ConPET gains its
advantage on the largest dataset. The codes and datasets are available at
https://github.com/Raincleared-Song/ConPET.
Comment: 12 pages, 3 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
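The Dynamic ConPET design described above, separate PET modules per task plus a selector that routes each input to the best module, can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' code: the `PETModule`, `PETSelector`, and `DynamicConPET` names, the rank-free residual adapters, and the prototype-similarity selection rule are all illustrative assumptions.

```python
# Toy sketch of the Dynamic ConPET idea (hypothetical, not the paper's code):
# one lightweight PET (adapter) module per task, and a selector that picks
# the module whose task prototype best matches the incoming input.

class PETModule:
    """A tiny per-task adapter: applies a residual scaling to the input."""
    def __init__(self, task_id, scale):
        self.task_id = task_id
        self.scale = scale  # stand-in for the module's tuned parameters

    def forward(self, x):
        # residual adapter: output = input + scale * input
        return [v + self.scale * v for v in x]

class PETSelector:
    """Scores each registered task on an input via dot-product similarity
    with a stored per-task prototype vector (an assumed selection rule)."""
    def __init__(self):
        self.prototypes = {}  # task_id -> prototype vector

    def register(self, task_id, prototype):
        self.prototypes[task_id] = prototype

    def select(self, x):
        def score(tid):
            return sum(a * b for a, b in zip(self.prototypes[tid], x))
        return max(self.prototypes, key=score)

class DynamicConPET:
    """Holds one PET module per task; routing cost stays independent of
    how many tasks have been seen, since only one module runs per input."""
    def __init__(self):
        self.modules = {}
        self.selector = PETSelector()

    def add_task(self, task_id, prototype, scale):
        # adding a task adds a small module, not a full model copy
        self.modules[task_id] = PETModule(task_id, scale)
        self.selector.register(task_id, prototype)

    def forward(self, x):
        tid = self.selector.select(x)
        return tid, self.modules[tid].forward(x)

model = DynamicConPET()
model.add_task("task_a", prototype=[1.0, 0.0], scale=0.1)
model.add_task("task_b", prototype=[0.0, 1.0], scale=0.5)
tid, out = model.forward([0.2, 0.9])  # input closer to task_b's prototype
```

The point of the sketch is the scaling behavior: adding a new task only adds one small adapter and one prototype, so per-input compute stays constant as the task count grows.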